SAT-based impossible differential cryptanalysis of GRANULE cipher
Xiaonian WU, Jing KUANG, Runlian ZHANG, Lingchen LI
Journal of Computer Applications    2024, 44 (3): 797-804.   DOI: 10.11772/j.issn.1001-9081.2023040435

Boolean SATisfiability problem (SAT)-based automated search methods can directly describe logical operations such as AND, OR, NOT and XOR, and thus establish efficient search models. To efficiently evaluate the resistance of the GRANULE cipher against impossible differential attacks, the SAT model describing the S-box differential property was first optimized based on the properties of the S-box differential distribution table. Then, a SAT model of bit-oriented impossible differential distinguishers was established for the GRANULE cipher, and multiple 10-round impossible differential distinguishers were obtained by solving it. Furthermore, an improved SAT-based automated verification method was given and used to verify the distinguishers. Finally, a 16-round impossible differential attack was mounted on GRANULE-64/80 by extending a distinguisher 3 rounds forward and 3 rounds backward, recovering the 80-bit master key with a time complexity of 2^51.8 16-round encryptions and a data complexity of 2^41.8 chosen plaintexts. Compared with the best previous impossible differential cryptanalysis results for the GRANULE cipher, both the distinguisher and the key-recovery attack are extended by 3 rounds, and the time and data complexities are further reduced.
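As a concrete illustration of the S-box property such a model encodes, the sketch below computes a differential distribution table (DDT) and collects the zero-probability transitions that become blocking clauses in a SAT model. The 4-bit S-box shown is a stand-in (PRESENT's), not GRANULE's actual table.

```python
# Sketch: deriving the DDT of a 4-bit S-box, the property used to tighten
# the SAT description of the S-box. Placeholder S-box, NOT GRANULE's.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

# Transitions with DDT entry 0 are impossible; a SAT model forbids each
# such (dx, dy) pair with a blocking clause.
impossible = [(dx, dy) for dx, row in enumerate(ddt(SBOX))
              for dy, cnt in enumerate(row) if cnt == 0 and dx != 0]
print(f"{len(impossible)} impossible 4-bit transitions to encode as clauses")
```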

Chinese homophonic neologism discovery method based on Pinyin similarity
Hanchen LI, Shunxiang ZHANG, Guangli ZHU, Tengke WANG
Journal of Computer Applications    2023, 43 (9): 2715-2720.   DOI: 10.11772/j.issn.1001-9081.2022091390

As one of the basic tasks of natural language processing, new word identification provides theoretical support for the construction of Chinese dictionaries and the analysis of word sentiment tendency. However, current new word identification methods do not consider homophonic neologisms, resulting in low precision when identifying them. To solve this problem, a Chinese homophonic neologism discovery method based on Pinyin similarity was proposed, which improves the precision of homophonic neologism identification by introducing a phonetic comparison between new and old words. Firstly, the text was preprocessed, the Average Mutual Information (AMI) was calculated to determine the degree of internal cohesion of candidate words, and an improved branch entropy was used to determine the boundaries of candidate new words. Then, the retained words were transformed into Chinese Pinyin and compared with the Pinyin of old words in a Chinese dictionary, keeping the most similar comparison results. Finally, if a comparison result exceeded the threshold, the new word in the result was taken as a homophonic neologism and its counterpart as the original word. Experimental results on a self-built Weibo dataset show that compared with BNshCNs (Blended Numeric and symbolic homophony Chinese Neologisms) and DSSCNN (similarity computing model based on Dependency Syntax and Semantics), the proposed method improves precision, recall and F1 score by 0.51 and 5.27 percentage points, 2.91 and 6.31 percentage points, and 1.75 and 5.81 percentage points respectively, indicating a better Chinese homophonic neologism identification effect.
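A minimal sketch of the Pinyin-comparison step described above; pypinyin and the string-similarity measure here are assumed stand-ins rather than the paper's exact implementation.

```python
# Sketch: score the Pinyin similarity of a candidate new word against a
# dictionary word. pypinyin and SequenceMatcher are illustrative choices.
from difflib import SequenceMatcher
from pypinyin import lazy_pinyin

def pinyin_similarity(word_a: str, word_b: str) -> float:
    # Join syllables so matches can cross syllable boundaries.
    pa, pb = "".join(lazy_pinyin(word_a)), "".join(lazy_pinyin(word_b))
    return SequenceMatcher(None, pa, pb).ratio()

# A candidate is kept as a homophonic neologism if its best dictionary
# match exceeds a threshold (value here is illustrative).
THRESHOLD = 0.8
print(pinyin_similarity("鸭梨", "压力"))  # homophones -> score near 1.0
```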

Quantum K-Means algorithm based on Hamming distance
Jing ZHONG, Chen LIN, Zhiwei SHENG, Shibin ZHANG
Journal of Computer Applications    2023, 43 (8): 2493-2498.   DOI: 10.11772/j.issn.1001-9081.2022091469

The K-Means algorithm typically uses Euclidean distance to calculate the similarity between data points when dealing with large-scale heterogeneous data, which suffers from low efficiency and high computational complexity. Inspired by the significant advantage of Hamming distance in similarity calculation, a Quantum K-Means Hamming (QKMH) algorithm was proposed. First, the data was encoded into quantum states, and the quantum Hamming distance was used to calculate the similarity between the points to be clustered and the K cluster centers. Then, Grover's minimum search algorithm was improved to find the cluster center closest to each point. Finally, these steps were repeated until the designated number of iterations was reached or the cluster centers no longer changed. The proposed algorithm was validated on the MNIST handwritten digit dataset using the quantum simulation framework Qiskit and compared with various traditional and improved methods. Experimental results show that the F1 score of the QKMH algorithm is improved by 10 percentage points compared with the Manhattan-distance-based quantum K-Means algorithm and by 4.6 percentage points compared with the latest optimized Euclidean-distance-based quantum K-Means algorithm, while its time complexity is lower than those of both comparison algorithms.
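For intuition, the following classical sketch performs the assignment step that QKMH accelerates quantumly: nearest-center search under Hamming distance on binarized data.

```python
# Classical sketch of the distance step: K-Means assignment using Hamming
# distance (number of differing bits) on binary feature vectors.
import numpy as np

def hamming_kmeans_assign(points, centers):
    """points: (n, d) binary array; centers: (k, d) binary array."""
    dists = (points[:, None, :] != centers[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)  # index of nearest center per point

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(6, 8))
C = rng.integers(0, 2, size=(2, 8))
print(hamming_kmeans_assign(X, C))
```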

Research progress in public-key encryption with keyword search
Wenshuai SONG, Miaolei DENG, Mimi MA, Haochen LI
Journal of Computer Applications    2023, 43 (3): 794-803.   DOI: 10.11772/j.issn.1001-9081.2022020234

With the continuous development of big data and cloud computing technology, cloud platforms have become the first choice for massive data storage, and the privacy and security of user data has become one of the most important issues in the cloud computing environment. To ensure data security, users usually encrypt sensitive data before storing it on cloud servers, which makes efficient retrieval of ciphertext data on the cloud a challenge. Searchable encryption provides an effective solution by allowing users to retrieve ciphertext data directly through keywords, protecting data privacy while reducing communication and computation overhead. In recent years, to cope with different platforms and application scenarios, Public-key Encryption with Keyword Search (PEKS) has produced a large number of extension schemes based on different hardness assumptions, query methods and structures. In terms of security and functional extensions, PEKS extension schemes were reviewed with respect to the permission sharing, key management, fine-grained search and access control capabilities required by current applications; the performance of the described schemes was compared and analyzed in depth, and their advantages and shortcomings were pointed out. Finally, the development trends of PEKS technology were summarized and prospected.

Improved instruction obfuscation framework based on obfuscator low level virtual machine
Yayi WANG, Chen LIU, Tianbo HUANG, Weiping WEN
Journal of Computer Applications    2023, 43 (2): 490-498.   DOI: 10.11772/j.issn.1001-9081.2021122234

Focusing on the issue that, at the instruction obfuscation level, Obfuscator Low Level Virtual Machine (OLLVM) supports only an instruction substitution pass covering 5 operators and 13 substitution schemes, an improved instruction obfuscation framework named InsObf was proposed. InsObf, consisting of junk code insertion and instruction substitution, enhances the obfuscation effect at the instruction level on top of OLLVM. For junk code insertion, the dependencies of the instructions inside each basic block were first analyzed, and then two kinds of junk code, multiple jumps and bogus loops, were inserted to disrupt the structure of the basic block. For instruction substitution, the OLLVM scheme was expanded to 13 operators with 52 substitution schemes. A framework prototype was implemented on Low Level Virtual Machine (LLVM). Experimental results show that, compared with OLLVM, InsObf increases cyclomatic complexity and resilience almost fourfold, at a time cost about 10 percentage points and a space cost about 20 percentage points higher. Moreover, at the same order of magnitude of time and space costs, InsObf provides higher code complexity than Armariris and Hikari, which are also improved on the basis of OLLVM. Therefore, InsObf can provide effective protection at the instruction level.
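As a flavor of what an instruction-substitution scheme looks like, the sketch below checks a classic add-substitution identity; it is an illustrative example of the technique, not necessarily one of InsObf's 52 schemes.

```python
# Illustration of instruction substitution: replace an ADD with a
# semantically equivalent mixed boolean-arithmetic expression.
import random

def obfuscated_add(a: int, b: int) -> int:
    return (a ^ b) + 2 * (a & b)  # equals a + b for all integers

for _ in range(1000):
    a, b = random.getrandbits(32), random.getrandbits(32)
    assert obfuscated_add(a, b) == a + b
print("identity holds")
```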

UAV path planning for persistent monitoring based on value function iteration
Chen LIU, Yang CHEN, Hao FU
Journal of Computer Applications    2023, 43 (10): 3290-3296.   DOI: 10.11772/j.issn.1001-9081.2022091464

Using an Unmanned Aerial Vehicle (UAV) to continuously monitor designated areas can deter intrusion and damage and detect anomalies in time, but fixed monitoring routes are easily discovered by intruders, so the UAV flight path needs a randomized design. To this end, a UAV persistent monitoring path planning algorithm based on Value Function Iteration (VFI) was proposed. Firstly, the states of the monitoring target points were selected reasonably, and the remaining time of each monitoring node was analyzed. Secondly, the value function of the corresponding state of each monitoring target point was constructed by combining the reward/penalty benefit and the path security constraint; during value function iteration, the next node was selected randomly based on the ε-greedy principle and roulette-wheel selection. Finally, the UAV persistent monitoring path was solved with the goal that the growth of the value functions of all states tends to saturation. Simulation results show that the proposed algorithm obtains an information entropy of 0.905 0 with a VFI running time of 0.363 7 s. Compared with traditional Ant Colony Optimization (ACO), information entropy is increased by 216% and running time is decreased by 59%, improving both randomness and speed. This verifies that a randomized UAV flight path is of great significance for improving the efficiency of persistent monitoring.
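A minimal sketch of the stochastic node-selection step, assuming an ε-greedy switch between roulette-wheel exploration and greedy exploitation; the state values and ε below are illustrative, not the paper's settings.

```python
# Sketch: choose the next monitoring node. With probability epsilon,
# explore via roulette-wheel selection over (positive) state values;
# otherwise exploit the best-valued candidate.
import random

def choose_next(candidates, value, epsilon=0.2):
    if random.random() < epsilon:
        weights = [value[c] for c in candidates]    # roulette wheel
        return random.choices(candidates, weights=weights, k=1)[0]
    return max(candidates, key=lambda c: value[c])  # greedy choice

value = {"A": 1.0, "B": 3.5, "C": 0.5}
print(choose_next(["A", "B", "C"], value))
```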

Tire defect detection method based on improved Faster R-CNN
WU Zeju, JIAO Cuijuan, CHEN Liang
Journal of Computer Applications    2021, 41 (7): 1939-1946.   DOI: 10.11772/j.issn.1001-9081.2020091488
Defects such as sidewall foreign matter, crown foreign matter, air bubbles, crown splits and sidewall root opening that appear during tire production affect the use of tires after leaving the factory, so nondestructive testing of every tire is necessary before shipment. To achieve automatic industrial detection of tire defects, an automatic detection method based on an improved Faster Region-Convolutional Neural Network (Faster R-CNN) was proposed. Firstly, at the preprocessing stage, the gray levels of tire images were stretched by histogram equalization to enhance the contrast of the dataset, producing a significant difference between the gray values of the target and the background. Secondly, to improve the accuracy of tire defect localization and identification, the Faster R-CNN structure was improved: the third-layer and fifth-layer convolutional features of the ZF (Zeiler and Fergus) convolutional neural network were combined and fed as input to the region proposal network layer. Thirdly, the Online Hard Example Mining (OHEM) algorithm was introduced after the RoI (Region-of-Interest) pooling layer to further improve the accuracy of defect detection. Experimental results show that tire X-ray image defects can be classified and located accurately by the improved method, with an average test recognition rate of 95.7%. In addition, new detection models can be obtained by fine-tuning the network to detect other types of defects.
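The preprocessing step can be sketched in a few lines, assuming OpenCV; the file paths are illustrative.

```python
# Sketch of the preprocessing stage: histogram equalization to stretch the
# gray levels of a tire X-ray image before feeding it to the detector.
import cv2

img = cv2.imread("tire_xray.png", cv2.IMREAD_GRAYSCALE)  # path illustrative
equalized = cv2.equalizeHist(img)  # spreads the gray-level histogram
cv2.imwrite("tire_xray_eq.png", equalized)
```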
Attention fusion network based video super-resolution reconstruction
BIAN Pengcheng, ZHENG Zhonglong, LI Minglu, HE Yiran, WANG Tianxiang, ZHANG Dawei, CHEN Liyuan
Journal of Computer Applications    2021, 41 (4): 1012-1019.   DOI: 10.11772/j.issn.1001-9081.2020081292
Video super-resolution methods based on deep learning mainly focus on the inter-frame and intra-frame spatio-temporal relationships in a video, but previous methods have shortcomings in the feature alignment and fusion of video frames, such as inaccurate motion estimation and insufficient feature fusion. Aiming at these problems, a video super-resolution model based on an Attention Fusion Network (AFN) was constructed using the back-projection principle together with multiple attention mechanisms and fusion strategies. Firstly, at the feature extraction stage, in order to deal with the multiple motions between neighboring frames and the reference frame, the back-projection architecture was used to obtain error feedback on motion information. Then, a temporal, spatial and channel attention fusion module was used to perform multi-dimensional feature mining and fusion. Finally, at the reconstruction stage, the obtained high-dimensional features were convolved to reconstruct high-resolution video frames. By learning different weights of features within and between video frames, the correlations between video frames were fully explored, and an iterative network structure was adopted to process the extracted features gradually from coarse to fine. Experimental results on two public benchmark datasets show that AFN can effectively process videos with multiple motions and occlusions, and achieves significant improvements in quantitative indicators over some mainstream methods. For instance, for the 4-times reconstruction task, the Peak Signal-to-Noise Ratio (PSNR) of frames reconstructed by AFN is 13.2% higher than that of the Frame Recurrent Video Super-Resolution network (FRVSR) on the Vid4 dataset and 15.3% higher than that of the Video Super-Resolution network using Dynamic Upsampling Filters (VSR-DUF) on the SPMCS dataset.
Video person re-identification based on non-local attention and multi-feature fusion
LIU Ziyan, ZHU Mingcheng, YUAN Lei, MA Shanshan, CHEN Lingzhouting
Journal of Computer Applications    2021, 41 (2): 530-536.   DOI: 10.11772/j.issn.1001-9081.2020050739
Aiming at the fact that existing video person re-identification methods cannot effectively extract the spatio-temporal information between consecutive frames of a video, a person re-identification network based on non-local attention and multi-feature fusion was proposed to extract global and local representation features and time-series information. Firstly, non-local attention modules were embedded to extract global features. Then, multi-feature fusion was realized by extracting low-level and mid-level features as well as local features, so as to obtain salient person features. Finally, similarity measurement and ranking were performed on the person features to compute the accuracy of video person re-identification. The proposed model significantly outperforms the existing Multi-scale 3D Convolution (M3D) and Learned Clip Similarity Aggregation (LCSA) models, with the mean Average Precision (mAP) reaching 81.4% and 93.4% and the Rank-1 reaching 88.7% and 95.3% on the large datasets MARS and DukeMTMC-VideoReID respectively. At the same time, the proposed model achieves a Rank-1 of 94.8% on the small dataset PRID2011.
Service composition optimization based on improved krill herd algorithm
Shuicong LIAO, Peng SUN, Xingchen LIU, Yun ZHONG
Journal of Computer Applications    2021, 41 (12): 3652-3657.   DOI: 10.11772/j.issn.1001-9081.2021040699

To address the problems of easily falling into local optima and high time cost in service composition optimization under Service-Oriented Architecture (SOA), an improved Krill Herd algorithm named PRKH, with adaptive crossover and a random perturbation operator, was proposed. Firstly, a service composition optimization model was established based on Quality of Service (QoS), and QoS calculation formulas and normalization methods under different composition structures were given. Then, based on the Krill Herd (KH) algorithm, an adaptive crossover probability and a random perturbation based on the actual offset were added to strike a good balance between the global and local search abilities of the krill herd. Finally, in simulation, the proposed algorithm was compared with the KH algorithm, Particle Swarm Optimization (PSO) algorithm, Artificial Bee Colony (ABC) algorithm and Flower Pollination Algorithm (FPA). Experimental results show that PRKH finds better QoS composite services faster.
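A minimal sketch of QoS normalization and weighted aggregation, with illustrative attributes and weights; the paper's exact per-structure formulas may differ.

```python
# Sketch: normalize QoS attributes so larger is always better, then score a
# candidate service by a weighted sum. Attribute names/weights illustrative.
def normalize(value, vmin, vmax, is_benefit):
    if vmax == vmin:
        return 1.0
    q = (value - vmin) / (vmax - vmin)
    return q if is_benefit else 1.0 - q  # invert cost attributes

def qos_score(service, bounds, weights):
    return sum(w * normalize(service[attr], *bounds[attr])
               for attr, w in weights.items())

bounds = {"time": (10, 200, False), "reliability": (0.8, 1.0, True)}
weights = {"time": 0.5, "reliability": 0.5}
print(qos_score({"time": 50, "reliability": 0.95}, bounds, weights))
```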

Video-based person re-identification method by jointing evenly sampling-random erasing and global temporal feature pooling
CHEN Li, WANG Hongyuan, ZHANG Yunpeng, CAO Liang, YIN Yuchang
Journal of Computer Applications    2021, 41 (1): 164-169.   DOI: 10.11772/j.issn.1001-9081.2020060909
In order to solve the problem of low video-based person re-identification accuracy caused by factors such as occlusion, background interference, and similarity of person appearance and posture in video surveillance, a video-based person re-identification method combining Evenly Sampling-random Erasing (ESE) and global temporal feature pooling was proposed. Firstly, for situations where the target person is disturbed or partially occluded, the evenly sampling-random erasing data augmentation was adopted to effectively alleviate occlusion and improve the generalization ability of the model, so as to match persons more accurately. Secondly, to further improve re-identification accuracy and learn more discriminative feature representations, a 3D Convolutional Neural Network (3DCNN) was used to extract temporal and spatial features, and a Global Temporal Feature Pooling (GTFP) layer was added before the output of the person feature representations, preserving contextual spatial information while refining intra-frame temporal information. Extensive experiments on three public video datasets, MARS, DukeMTMC-VideoReID and PRID-2011, show that the method jointing evenly sampling-random erasing and global temporal feature pooling is competitive with state-of-the-art video-based person re-identification methods.
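The random-erasing half of ESE can be sketched as below; the parameters are illustrative, not the paper's settings.

```python
# Sketch: blank a random rectangle in a frame so the model learns to cope
# with partial occlusion (random-erasing data augmentation).
import numpy as np

def random_erase(frame, max_frac=0.3, rng=np.random.default_rng()):
    h, w = frame.shape[:2]
    eh = rng.integers(1, max(2, int(h * max_frac)))  # erase height
    ew = rng.integers(1, max(2, int(w * max_frac)))  # erase width
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out = frame.copy()
    out[y:y + eh, x:x + ew] = rng.integers(0, 256)   # random fill value
    return out

frame = np.zeros((128, 64, 3), dtype=np.uint8)
print(random_erase(frame).shape)
```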
Sentiment classification of incomplete data based on bidirectional encoder representations from transformers
LUO Jun, CHEN Lifei
Journal of Computer Applications    2021, 41 (1): 139-144.   DOI: 10.11772/j.issn.1001-9081.2020061066
Incomplete data, such as interactive information on social platforms and review contents in Internet movie datasets, widely exists in real life. However, most existing sentiment classification models are built on complete data, without considering the impact of incomplete data on classification performance. To address this problem, a stacked denoising neural network model based on BERT (Bidirectional Encoder Representations from Transformers) was proposed for sentiment classification of incomplete data. The model is composed of two components: a Stacked Denoising AutoEncoder (SDAE) and BERT. Firstly, the word-embedded incomplete data was fed to the SDAE for denoising training, extracting deep features to reconstruct the feature representations of missing and wrong words. Then, the obtained output was passed into the BERT pre-training model to further refine the feature vector representations of the words. Experimental results on two commonly used sentiment datasets demonstrate that the proposed method improves the F1 measure and classification accuracy on incomplete data by about 6% and 5% respectively, verifying the effectiveness of the proposed model.
Background subtraction based on tensor nuclear norm and 3D total variation
CHEN Lixia, BAN Ying, WANG Xuewen
Journal of Computer Applications    2020, 40 (9): 2737-2742.   DOI: 10.11772/j.issn.1001-9081.2020010005
Concerning the fact that common background subtraction methods ignore the spatio-temporal continuity of the foreground and the disturbance of dynamic background on foreground extraction, an improved background subtraction model based on Tensor Robust Principal Component Analysis (TRPCA) was proposed. An improved tensor nuclear norm was used to constrain the background, which enhanced the low-rankness of the background and retained the spatial information of the video. Then, the foreground was regularized by 3D Total Variation (3D-TV), accounting for the spatio-temporal continuity of objects and effectively suppressing the interference of dynamic background and target movement on foreground extraction. Experimental results show that the proposed model can effectively separate the foreground and background of videos. Compared with High-order Robust Principal Component Analysis (HoRPCA), Tensor Robust Principal Component Analysis with Tensor Nuclear Norm (TRPCA-TNN) and Kronecker-Basis-Representation based Robust Principal Component Analysis (KBR-RPCA), the proposed algorithm achieves optimal or sub-optimal F-measure values. It can be seen that the proposed model effectively improves the accuracy of foreground-background separation, and suppresses the interference of complex weather and target movement on foreground extraction.
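In outline, the decomposition described above can be written as an optimization problem of the following general form; this is a reconstruction with assumed notation (X the video tensor, B the low-rank background, F the foreground, E the dynamic disturbance, λ1 and λ2 trade-off weights), not the paper's exact formulation.

```latex
\min_{\mathcal{B},\,\mathcal{F},\,\mathcal{E}}\;
  \|\mathcal{B}\|_{\mathrm{TNN}}
  + \lambda_1 \|\mathcal{F}\|_{\mathrm{3D\text{-}TV}}
  + \lambda_2 \|\mathcal{E}\|_1
\quad \text{s.t.} \quad
  \mathcal{X} = \mathcal{B} + \mathcal{F} + \mathcal{E}
```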
Recommendation algorithm based on modularity and label propagation
SHENG Jun, LI Bin, CHEN Ling
Journal of Computer Applications    2020, 40 (9): 2606-2612.   DOI: 10.11772/j.issn.1001-9081.2020010095
To solve the problem of commodity recommendation based on network information, a recommendation algorithm based on community mining and label propagation over a bipartite network was proposed. Firstly, a weighted bipartite graph was used to represent the user-item rating matrix, and label propagation was adopted to perform community mining on the bipartite network. Then, items that users might be interested in were mined based on the community structure information, making full use of the similarity between the communities the users belong to, the similarity between items, and the similarity between users. Finally, item recommendations were made to the users. Experimental results on real-world networks show that, compared with the Collaborative Filtering recommendation algorithm based on item rating prediction using Bidirectional Association Rules (BAR-CF), the Collaborative Filtering recommendation algorithm based on Item Rating prediction (IR-CF), the user Preferences prediction method based on network Link Prediction (PLP) and Modified User-based Collaborative Filtering (MU-CF), the proposed algorithm achieves a Mean Absolute Error (MAE) 0.1 to 0.3 lower and a precision 0.2 higher, thus providing higher-quality recommendation results than similar methods.
Nonlinear systems identification based on structural adaptive filtering method
FENG Zikai, CHEN Lijia, LIU Mingguo, YUAN Meng’en
Journal of Computer Applications    2020, 40 (8): 2319-2326.   DOI: 10.11772/j.issn.1001-9081.2019111996
To overcome the strong structural restrictions and low identification rate of nonlinear system identification with fixed structures and parameters, a Subsystem-based Structural Adaptive Filtering (SSAF) method for nonlinear system identification was proposed, introducing structural adaptation into the identification optimization. Multiple subsystems with a linear-nonlinear hybrid structure are cascaded to form the model: the linear part is a 1st- or 2nd-order Infinite Impulse Response (IIR) digital filter with uncertain parameters, and the nonlinear part is a static nonlinear function. In the initial stage, the parameters of the subsystems were generated randomly, and the subsystems were connected randomly according to the set connection rules, with a feedback-free connection mechanism guaranteeing the validity of the nonlinear system. An Adaptive Multiple-Elites-guided Composite Differential Evolution with a shift mechanism (AMECoDEs) algorithm was used for loop optimization of the adaptive model until the optimal structure and parameters, i.e. the global optimum, were found. Simulation results show that AMECoDEs performs well on nonlinear test functions and real data sets with a high identification rate and good convergence speed. Compared with the Focused Time Lagged Recurrent Neural Network (FTLRNN), SSAF reduces the number of parameters to 1/10 and improves fitness accuracy by 7%, which proves the effectiveness of the proposed method.
Intelligent traffic sign recognition method based on capsule network
CHEN Lichao, ZHENG Jiamin, CAO Jianfang, PAN Lihu, ZHANG Rui
Journal of Computer Applications    2020, 40 (4): 1045-1049.   DOI: 10.11772/j.issn.1001-9081.2019091610
The scalar neurons of convolutional neural networks cannot express feature location information and adapt poorly to complex vehicle driving environments, resulting in a low traffic sign recognition rate. Therefore, an intelligent traffic sign recognition method based on a capsule network was proposed. Firstly, a very deep convolutional neural network was used to improve the feature extraction part. Then, a pooling layer was introduced in the main capsule layer. Finally, the movement index average method was used to improve the dynamic routing algorithm. Test results on the GTSRB dataset show that the improved capsule network method improves recognition accuracy in special scenes by 10.02 percentage points, and compared with the traditional convolutional neural network, decreases the recognition time for a single image by 2.09 ms. Experimental results show that the improved capsule network method can meet the requirements of accurate and real-time traffic sign recognition.
Strategy with low redundant computation for reachability query preserving graph compression
Danfeng ZHAO, Junchen LIN, Wei SONG, Jian WANG, Dongmei HUANG
Journal of Computer Applications    2020, 40 (2): 510-517.   DOI: 10.11772/j.issn.1001-9081.2019091666

Since some computations in the reachability Query Preserving Graph Compression (QPGC) algorithm are redundant, a high-performance compression strategy was proposed. For the stage of solving the ancestor and descendant sets of vertices, an algorithm named TSB (Topological Sorting Based algorithm for solving ancestor and descendant sets) was proposed for general graph data: the vertices were first topologically sorted, and the vertex sets were then solved in forward or reverse order of the topological sequence, avoiding the redundant computation caused by an ambiguous solving order. For graph data whose longest path is short, an algorithm based on graph aggregation operations, named AGGB (AGGregation Based algorithm for solving ancestor and descendant sets), was proposed, so that the vertex sets could be solved in a fixed number of aggregation operations. For the stage of solving reachability equivalence classes, a Piecewise Statistical Pruning (PSP) algorithm was proposed: piecewise statistics of the ancestor and descendant sets were obtained and compared for coarse matching, pruning unnecessary fine matching. Experimental results show that, compared with the QPGC algorithm, TSB and AGGB improve the performance of the ancestor/descendant stage by 94.22% and 90.00% on average respectively on different datasets, and PSP improves the performance of the equivalence-class stage by more than 70% on most datasets. As the dataset grows, using TSB and AGGB together with PSP improves performance by nearly 28 times. Theoretical analysis and simulation results show that the proposed strategy involves less redundant computation and compresses faster than QPGC.
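The idea behind TSB can be sketched as follows (Python 3.9+ for graphlib): visiting vertices in reverse topological order makes each descendant set a simple union of already-computed successor sets, with no recomputation.

```python
# Sketch of the TSB idea: compute every vertex's descendant set in one pass
# over the vertices in reverse topological order.
from graphlib import TopologicalSorter

def descendant_sets(successors):
    # Feeding the successor map to TopologicalSorter makes each vertex come
    # out *after* all of its successors, i.e. reverse topological order.
    desc = {}
    for u in TopologicalSorter(successors).static_order():
        s = set()
        for v in successors.get(u, ()):
            s.add(v)
            s |= desc[v]  # successor sets are already finished
        desc[u] = s
    return desc

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(descendant_sets(g))  # d: {}, b/c: {d}, a: {b, c, d}
```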

Lightweight convolutional neural network based on cross-channel fusion and cross-module connection
CHEN Li, DING Shifei, YU Wenjia
Journal of Computer Applications    2020, 40 (12): 3451-3457.   DOI: 10.11772/j.issn.1001-9081.2020060882
In order to solve the problems of too many parameters and high computational complexity in traditional convolutional neural networks, a lightweight convolutional neural network architecture named C-Net, based on cross-channel fusion and cross-module connection, was proposed. Firstly, a method called cross-channel fusion was proposed; it alleviates, to a certain extent, the lack of information flow between the groups of a grouped convolution, and realizes information exchange between groups efficiently and easily. Then, a method called cross-module connection was proposed; it overcomes the mutual independence of the basic building blocks in traditional lightweight architectures, and fuses information between modules with same-resolution feature maps within the same stage, enhancing the feature extraction capability. Finally, the novel lightweight architecture C-Net was designed based on these two methods. The accuracy of C-Net is 69.41% on the Food_101 dataset and 63.93% on the Caltech_256 dataset. Experimental results show that C-Net reduces memory cost and computational complexity in comparison with state-of-the-art lightweight models, and an ablation experiment on the Cifar_10 dataset verifies the effectiveness of the two proposed methods.
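One generic way to restore information flow between groups is the ShuffleNet-style channel interleave sketched below; this is an analogue for intuition only, not necessarily the paper's cross-channel fusion operator.

```python
# Sketch: interleave channels across groups after a grouped convolution so
# that every group sees channels from every other group.
import numpy as np

def channel_shuffle(x, groups):
    n, c, h, w = x.shape
    assert c % groups == 0
    y = x.reshape(n, groups, c // groups, h, w)
    y = y.transpose(0, 2, 1, 3, 4)  # swap group and per-group channel axes
    return y.reshape(n, c, h, w)

x = np.arange(2 * 8 * 1 * 1).reshape(2, 8, 1, 1)
print(channel_shuffle(x, groups=2)[0].ravel())  # channels interleaved
```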
Vehicle classification based on HOG-C CapsNet in traffic surveillance scenarios
CHEN Lichao, ZHANG Lei, CAO Jianfang, ZHANG Rui
Journal of Computer Applications    2020, 40 (10): 2881-2889.   DOI: 10.11772/j.issn.1001-9081.2020020152
To improve vehicle classification performance by making full use of image information from traffic surveillance, a Histogram of Oriented Gradient Convolutional (HOG-C) feature extraction method was added to the capsule network, and a Capsule Network model fusing HOG-C features (HOG-C CapsNet) was proposed. Firstly, the gradient data in the images were calculated by the gradient statistical feature extraction layer to plot the Histogram of Oriented Gradient (HOG) feature map. Secondly, the color information of the image was extracted by the convolutional layer and combined with the HOG feature map to plot the HOG-C feature map. Finally, the HOG-C feature map was fed into the convolutional layer to extract abstract features, which were encapsulated by the capsule network into capsules with three-dimensional spatial feature representations, and vehicle classification was realized by a dynamic routing algorithm. Compared with other related models on the BIT-Vehicle dataset, the proposed model achieves an accuracy of 98.17%, a Mean Average Precision (MAP) of 97.98%, a Mean Average Recall (MAR) of 98.42% and a comprehensive evaluation index of 98.20%. Experimental results show that HOG-C CapsNet achieves better vehicle classification performance in traffic surveillance scenarios.
Path planning algorithm of multi-population particle swarm manipulator based on monocular vision
YUAN Meng'en, CHEN Lijia, FENG Zikai
Journal of Computer Applications    2020, 40 (10): 2863-2871.   DOI: 10.11772/j.issn.1001-9081.2020020145
Aiming at the path planning problem of a manipulator with a complex static background and multiple constraints, a new multi-population particle swarm optimization algorithm based on an elite population and monocular vision was proposed. Firstly, an image difference algorithm was used to eliminate the background, the contour surrounding method was used to find the target area, and a model pose estimation method was used to locate the target position. Secondly, a multi-population particle swarm optimization based on an elite population was proposed to obtain the optimal joint angles of the manipulator for the target position. In this algorithm, the elite population and the sub-populations are combined to form the multi-population particle swarm, and pre-selection and interaction mechanisms are used to help the algorithm jump out of local optima. Simulation results show that, compared with the real coordinates, the coordinate error of the object position obtained by the background elimination method is small; and compared with state-of-the-art evolutionary algorithms, the proposed algorithm yields the smallest average fitness values of the paths and Mean Square Errors (MSE) for objects in different positions.
Detection method of hard exudates in fundus images by combining local entropy and robust principal components analysis
CHEN Li, CHEN Xiaoyun
Journal of Computer Applications    2019, 39 (7): 2134-2140.   DOI: 10.11772/j.issn.1001-9081.2019010208

To solve the time-consuming and error-prone problem in ophthalmologists' diagnosis of fundus images, an unsupervised automatic detection method for hard exudates in fundus images was proposed. Firstly, blood vessels, dark lesion regions and the optic disc were removed using morphological background estimation in the preprocessing phase. Then, with the luminosity channel of the image taken as the initial image, a low-rank matrix and a sparse matrix were obtained by combining local entropy and Robust Principal Component Analysis (RPCA), exploiting the locality and sparsity of hard exudates in fundus images. Finally, the hard exudate regions were obtained from the normalized sparse matrix. The performance of the proposed method was tested on the fundus image databases e-ophtha EX and DIARETDB1. Experimental results show that the proposed method achieves a sensitivity of 91.13% and a specificity of 90% at the lesion level, an accuracy of 99.03% at the image level, and an average running time of 0.5 s, giving it higher sensitivity and shorter running time than the Support Vector Machine (SVM) and K-means methods.

Node classification in signed networks based on latent space projection
SHENG Jun, GU Shensheng, CHEN Ling
Journal of Computer Applications    2019, 39 (5): 1411-1415.   DOI: 10.11772/j.issn.1001-9081.2018112559
Social network node classification is widely used in solving practical problems, but most existing node classification algorithms focus on unsigned social networks, and node classification algorithms for social networks with signed edges are rare. Based on the fact that negative links contribute more to signed network analysis than positive links, the classification of nodes on signed networks was studied. Firstly, the positive and negative networks were projected to corresponding latent spaces, and a mathematical model based on positive and negative links in the latent spaces was proposed. Then, an iterative algorithm was proposed to optimize the model, with iterative optimization of the latent space matrix and the projection matrix used to classify the nodes in the network. Experimental results on signed social network datasets show that the F1 value of the classification results of the proposed algorithm exceeds that of the random algorithm by more than 11 on the Epinions dataset and by more than 23.8 on the Slashdot dataset, indicating that the proposed algorithm has higher classification accuracy.
Foreground detection with weighted Schatten-p norm and 3D total variation
CHEN Lixia, LIU Junli, WANG Xuewen
Journal of Computer Applications    2019, 39 (4): 1170-1175.   DOI: 10.11772/j.issn.1001-9081.2018092038
In view of the fact that low-rank and sparse methods generally regard the foreground as abnormal pixels in the background, which decreases foreground detection precision in complex scenes, a new foreground detection method combining the weighted Schatten-p norm with 3D Total Variation (3D-TV) was proposed. Firstly, the observed data was divided into a low-rank background, a moving foreground and dynamic disturbance. Then, 3D total variation was used to constrain the moving foreground and strengthen the prior of spatio-temporal continuity of foreground objects, effectively suppressing the random disturbance of anomalous pixels in discontinuous dynamic backgrounds. Finally, the low-rankness of the video background was constrained by the weighted Schatten-p norm to remove noise interference. Experimental results show that, compared with Robust Principal Component Analysis (RPCA), Higher-order RPCA (HoRPCA) and Tensor RPCA (TRPCA), the proposed model achieves the highest F-measure and optimal or sub-optimal recall and precision. It can be concluded that the proposed model better overcomes interference in complex scenes, such as dynamic backgrounds and severe weather, and improves both the extraction accuracy and the visual effect of moving objects.
Quantum-inspired migrating birds co-optimization algorithm for lot-streaming flow shop scheduling problem
CHEN Linfeng, QI Xuemei, CHEN Junwen, HUANG Cheng, CHEN Fulong
Journal of Computer Applications    2019, 39 (11): 3250-3256.   DOI: 10.11772/j.issn.1001-9081.2019040700
A Quantum-inspired Migrating Birds Co-Optimization (QMBCO) algorithm was proposed to minimize the makespan of the Lot-streaming Flow shop Scheduling Problem (LFSP). Firstly, quantum coding based on Bloch coordinates was applied to expand the solution space. Secondly, an initial solution improvement scheme based on the Framinan-Leisten (FL) algorithm was used to make up for the weakness of traditional initial solutions and construct a high-quality random initial population. Finally, the Migrating Birds Optimization (MBO) and Variable Neighborhood Search (VNS) algorithms were applied iteratively to exchange information between worse and superior individuals, improving the global search ability. A set of instances of different scales was generated randomly, and QMBCO was compared with the Discrete Particle Swarm Optimization (DPSO), MBO and Quantum-inspired Cuckoo Co-Search (QCCS) algorithms on them. Experimental results show that, compared with DPSO, MBO and QCCS, QMBCO reduces the Average Relative Percentage Deviation (ARPD) by 65%, 34% and 24% on average respectively under the two types of running time, verifying the effectiveness and efficiency of the proposed algorithm.
Random structure based design method for multiplierless IIR digital filters
FENG Shuaidong, CHEN Lijia, LIU Mingguo
Journal of Computer Applications    2018, 38 (9): 2621-2625.   DOI: 10.11772/j.issn.1001-9081.2018030572
Focused on the fixed structure and poor performance of traditional multiplierless Infinite Impulse Response (IIR) digital filters, a random-structure-based design method for multiplierless IIR digital filters was proposed, in which stable 2nd-order subsystems with shifters are used directly to build the multiplierless filter structure. Firstly, a set of encoded multiplierless digital filter structures was created randomly. Then, Differential Evolution with a Successful-Parent-Selecting framework (SPS-DE) was used to optimize the multiplierless filter structure. The proposed method realizes diversified structure design, and SPS-DE effectively balances exploration and exploitation thanks to the successful-parent-selecting framework, achieving good results in the structure optimization. Compared with state-of-the-art design methods, the passband ripple of the designed multiplierless IIR filter is reduced by 43% and the maximum stopband attenuation is decreased by 40.4%. Simulation results show that the multiplierless IIR filter designed by the proposed method meets the structural requirements and performs well.
Semantic segmentation of blue-green algae based on deep generative adversarial net
YANG Shuo, CHEN Lifang, SHI Yu, MAO Yiming
Journal of Computer Applications    2018, 38 (6): 1554-1561.   DOI: 10.11772/j.issn.1001-9081.2017122872
Concerning the insufficient accuracy of traditional image segmentation algorithms in segmenting blue-green algae images, a new network structure named Deep Generative Adversarial Net (DGAN), based on Deep Neural Network (DNN) and Generative Adversarial Net (GAN), was proposed. Firstly, based on the Fully Convolutional Network (FCN), a 12-layer FCN was constructed as the generator (G), which learns the data distribution and generates segmentation results of blue-green algae images (Fake). Secondly, a 5-layer Convolutional Neural Network (CNN) was constructed as the discriminator (D) to distinguish the segmentation results generated by the generator (Fake) from the true, manually annotated segmentations (Label): G tries to generate Fake and deceive D, while D tries to detect Fake and penalize G. Finally, through the adversarial training of the two networks, a better segmentation result was obtained, because the Fake generated by G could fool D. Training and test results on an image set of 3075 blue-green algae images show that the proposed DGAN is far ahead of the iterative threshold segmentation algorithm in precision, recall and F1 score, exceeding DNN-based algorithms such as FCNNet (SHELHAMER E, LONG J, DARRELL T. Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4): 640-651) and Deeplab (CHEN L C, PAPANDREOU G, KOKKINOS I, et al. Semantic image segmentation with deep convolutional nets and fully connected CRFs. Computer Science, 2014(4): 357-361) by more than 4 percentage points, and thus obtaining more accurate segmentation results. In terms of segmentation speed, DGAN needs 0.63 s per image, slightly slower than the traditional FCNNet with 0.46 s, but much faster than Deeplab with 1.31 s. The balanced segmentation accuracy and speed of DGAN provide a feasible technical scheme for image-based semantic segmentation of blue-green algae.
Two-level confidence threshold setting method for positive and negative association rules
CHEN Liu, FENG Shan
Journal of Computer Applications    2018, 38 (5): 1315-1319.   DOI: 10.11772/j.issn.1001-9081.2017102469
Traditional confidence threshold setting methods for positive and negative association rules find it difficult to limit the number of low-reliability rules and tend to miss some interesting association rules. To address this, a new two-level confidence threshold setting method combined with itemset correlation, called PNMC-TWO, was proposed. Firstly, considering the consistency, validity and interestingness of rules, under the correlation-support-confidence framework and starting from the computational relationship between rule confidence and itemset support, the way rule confidence varies with itemset support was analyzed systematically. Then, combined with users' actual need for high-confidence, interesting rules in mining, a new confidence threshold setting model was proposed to avoid the blindness and randomness of traditional threshold setting. Finally, the proposed method was compared with the original two-threshold method in terms of the quantity and quality of rules. Experimental results show that the new two-level threshold method not only ensures that the extracted association rules are more effective and interesting, but also significantly reduces the number of low-reliability rules.
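The computational relationship between rule confidence and itemset support that such an analysis starts from is the standard one, where s(·) denotes itemset support:

```latex
\mathrm{conf}(A \Rightarrow B) = \frac{s(A \cup B)}{s(A)}, \qquad
\mathrm{conf}(A \Rightarrow \neg B) = \frac{s(A) - s(A \cup B)}{s(A)}
  = 1 - \mathrm{conf}(A \Rightarrow B)
```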
Moving object removal forgery detection algorithm in video frame
YIN Li, LIN Xinqi, CHEN Lifei
Journal of Computer Applications    2018, 38 (3): 879-883.   DOI: 10.11772/j.issn.1001-9081.2017092198
Aiming at tampering with objects within frames of digital video, a tamper detection algorithm based on Principal Component Analysis (PCA) was proposed. Firstly, the difference frame obtained by subtracting the detected video frame from the reference frame was denoised by a sparse-representation method, reducing the interference of noise with subsequent feature extraction. Secondly, the denoised video frame was divided into non-overlapping blocks, and pixel features were extracted by PCA to construct a feature vector space. Then, the k-means algorithm was used to classify the feature vector space, with the classification result expressed as a binary matrix. Finally, morphological operations were applied to the binary image to obtain the final detection result. Experimental results show that the proposed algorithm achieves a precision of 91%, a recall of 100% and an F1 value of 95.3%, which are better to some extent than those of the video forgery detection algorithm based on compressive sensing, and that for videos with a still background, the proposed algorithm can not only detect tampering with moving objects in the frame but is also robust to lossy video compression.
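A minimal sketch of the block-feature pipeline described above, assuming scikit-learn; the block size and component count are illustrative.

```python
# Sketch: split the difference frame into blocks, extract PCA features,
# and use 2-way k-means to separate tampered from untouched blocks.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def detect_blocks(diff_frame, block=8, n_components=4):
    h, w = diff_frame.shape
    blocks = [diff_frame[y:y + block, x:x + block].ravel()
              for y in range(0, h - block + 1, block)
              for x in range(0, w - block + 1, block)]
    feats = PCA(n_components=n_components).fit_transform(
        np.array(blocks, dtype=float))
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
    return labels.reshape(h // block, w // block)  # binary block map

diff = np.random.rand(64, 64)
print(detect_blocks(diff))
```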
Probability model-based algorithm for non-uniform data clustering
YANG Tianpeng, CHEN Lifei
Journal of Computer Applications    2018, 38 (10): 2844-2849.   DOI: 10.11772/j.issn.1001-9081.2018020375
Aiming at the "uniform effect" of the traditional K-means algorithm, a new probability model-based algorithm was proposed for non-uniform data clustering. Firstly, a Gaussian mixture distribution model was proposed to describe the clusters hidden within non-uniform data, allowing the datasets to contain clusters with different densities and sizes at the same time. Secondly, the objective optimization function for non-uniform data clustering was deduced based on the model, and an EM (Expectation Maximization)-type clustering algorithm defined to optimize the objective function. Theoretical analysis shows that the new algorithm is able to perform soft subspace clustering on non-uniform data. Finally, experimental results on synthetic datasets and real datasets demostrate that the accuracy of the proposed algorithm is increased by 5% to 50% compared with the existing K-means-type algorithms and under-sampling algorithms.
Classification of symbolic sequences with multi-order Markov model
CHENG Lingfang, GUO Gongde, CHEN Lifei
Journal of Computer Applications    2017, 37 (7): 1977-1982.   DOI: 10.11772/j.issn.1001-9081.2017.07.1977
To solve the problem that existing methods based on fixed-order Markov models cannot make full use of the structural features contained in subsequences of different orders, a new Bayesian method based on a multi-order Markov model was proposed for symbolic sequence classification. First, a Conditional Probability Distribution (CPD) model was built on the multi-order Markov model. Second, a suffix tree for n-order subsequences with efficient suffix tables and an efficient construction algorithm were proposed, by which the multi-order CPD models can be learned with a single scan of the sequence set. A Bayesian classifier was then proposed for the classification task: the training algorithm learns the order weights of the models of different orders by the Maximum Likelihood (ML) method, while the classification algorithm carries out Bayesian prediction using the weighted conditional probabilities of each order. Experiments on real-world sequence sets from three domains demonstrate that the new classifier is insensitive to changes in the predefined order of the model. Compared with existing methods such as the support vector machine using a fixed-order model, the proposed method achieves more than 40% improvement in classification accuracy on both gene and speech sequences, yielding reference values for the optimal order of a Markov model on symbolic sequences.
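A compact sketch of the multi-order idea: per-order conditional counts learned in one scan, then an order-weighted Bayesian score. Dictionary-based counts stand in for the paper's suffix tree, and the weights below are illustrative (the paper learns them by maximum likelihood).

```python
# Sketch: multi-order Markov CPDs from one scan, then a weighted log-score.
import math
from collections import defaultdict

def train(seqs, max_order):
    counts = [defaultdict(lambda: defaultdict(int))
              for _ in range(max_order + 1)]
    for s in seqs:
        for i, sym in enumerate(s):
            for k in range(max_order + 1):   # order-k context s[i-k:i]
                if i >= k:
                    counts[k][s[i - k:i]][sym] += 1
    return counts

def log_score(seq, counts, weights, alpha=1.0, vocab=4):
    total = 0.0
    for i, sym in enumerate(seq):
        for k, w in enumerate(weights):
            if i >= k:
                ctx = counts[k][seq[i - k:i]]
                # Laplace-smoothed conditional probability of order k.
                p = (ctx[sym] + alpha) / (sum(ctx.values()) + alpha * vocab)
                total += w * math.log(p)     # weighted per-order term
    return total

model = train(["ACGTACGT", "ACGGACGT"], max_order=2)
print(log_score("ACGT", model, weights=[0.2, 0.4, 0.4]))
```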